robust adversarial reinforcement learning
Review for NeurIPS paper: On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
Additional Feedback: Overall, I have a somewhat negative opinion of the paper. My main concerns include: (1) the related work is not well discussed; and (2) the authors define a robust stability condition that essentially makes a critical intermediate term in the analysis easy to deal with, whereas a more reasonable assumption should instead be imposed on A, B, and C. Post-rebuttal, regarding concern (1): some potential references on RARL that should be included are "Extending Robust Adversarial Reinforcement Learning Considering Adaptation and Diversity" (Shioya et al., 2018); "Adversarial Reinforcement Learning-Based Robust Access Point Coordination Against Uncoordinated Interference" (Kihira et al., 2020); "Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient" (Li et al., 2019); "Policy-Gradient Algorithms Have No Guarantees of Convergence in Linear Quadratic Games" (Mazumdar et al., 2019); "Policy Iteration for Linear Quadratic Games with Stochastic Parameters" (Gravell et al., 2020); "Risk Averse Robust Adversarial Reinforcement Learning" (Pan et al., 2019); and "Online Robust Policy Learning in the Presence of Unknown Adversaries" (Havens et al., 2018).
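For concreteness, the setting and the condition at issue can be sketched as follows; the notation, the weighting matrices Q, R^u, R^w, and the sign conventions here are assumptions for illustration, not taken verbatim from the paper:

```latex
% Sketch of a zero-sum LQ game (notation assumed): agent input u_t,
% adversary input w_t, with linear state-feedback gains K and L.
\[
  x_{t+1} = A x_t + B u_t + C w_t, \qquad
  J(K, L) = \mathbb{E} \sum_{t=0}^{\infty}
    \bigl( x_t^\top Q\, x_t + u_t^\top R^u u_t - w_t^\top R^w w_t \bigr),
\]
\[
  u_t = -K x_t, \qquad w_t = -L x_t, \qquad
  \text{robust stability (roughly):}\ \rho\bigl(A - BK - CL\bigr) < 1 .
\]
```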
Review for NeurIPS paper: On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
This paper studies a recent method, Robust Adversarial Reinforcement Learning (RARL) by Pinto et al., in the linear quadratic setting (linear dynamics, quadratic cost), which is a typical starting point for the analysis of optimal control algorithms. The paper examines the stabilization behavior of the linear controller, showing that RARL can exhibit instabilities even in this simplified linear quadratic setting. The paper then proposes a new formulation of RARL in the linear quadratic setting, which can inform solutions in the nonlinear setting, and provides stability guarantees for the proposed method. In the post-rebuttal discussion, three of the four reviewers evaluated the paper highly and recommended acceptance. I agree that the paper makes a significant and interesting enough contribution by pointing out the instabilities of RARL and addressing them in the linear quadratic setting, which in my view is sufficient for publication at NeurIPS.
On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems
Reinforcement learning (RL) algorithms can fail to generalize due to the gap between simulation and the real world. One standard remedy is robust adversarial RL (RARL), which accounts for this gap during policy training by modeling the gap as an adversary against the training agent. We first observe that the popular RARL scheme, which greedily alternates the agents' updates, can easily destabilize the system. Motivated by this, we propose several other policy-based RARL algorithms whose convergence behaviors are then studied both empirically and theoretically. We find: i) the conventional RARL framework (Pinto et al., 2017) can learn a destabilizing policy if the initial policy does not enjoy the robust stability property against the adversary; and ii) with robustly stabilizing initializations, our proposed double-loop RARL algorithm provably converges to the globally optimal cost while maintaining robust stability on the fly.
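To make the two schemes in the abstract concrete, here is a minimal numerical sketch, not the authors' implementation: a toy zero-sum LQ game on which greedy alternating updates and double-loop updates can be compared. Gradients are taken by finite differences rather than the exact policy-gradient expressions of the paper, and all matrices, step sizes, and initializations are arbitrary choices for illustration.

```python
# Toy sketch (not the paper's algorithm): greedy alternating vs. double-loop
# policy updates on a zero-sum LQ game.  All matrices, step sizes, and
# initializations below are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.5], [0.0, 1.0]])   # marginally unstable plant (assumed)
B = np.array([[0.0], [1.0]])             # agent input channel (assumed)
C = np.array([[0.5], [0.0]])             # adversary input channel (assumed)
Q, Ru, Rw = np.eye(2), np.eye(1), 5.0 * np.eye(1)  # adversary penalty Rw (assumed)

def cost(K, L):
    """J(K, L) = tr(P) with u = -Kx, w = -Lx; +inf if the closed loop is unstable."""
    Acl = A - B @ K - C @ L
    if not np.all(np.isfinite(Acl)) or np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf
    Qeff = Q + K.T @ Ru @ K - L.T @ Rw @ L
    # P solves P = Acl' P Acl + Qeff (discrete Lyapunov equation).
    P = solve_discrete_lyapunov(Acl.T, Qeff)
    return np.trace(P)

def grad(f, M, eps=1e-6):
    """Finite-difference gradient of f at matrix M (a stand-in for the exact
    LQ policy gradient; assumes the closed loop stays stable near M)."""
    G = np.zeros_like(M)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            E = np.zeros_like(M)
            E[i, j] = eps
            G[i, j] = (f(M + E) - f(M - E)) / (2 * eps)
    return G

def alternating(K, L, steps=200, lr=1e-2):
    """Greedy alternation: one K descent step, one L ascent step per round."""
    for _ in range(steps):
        K = K - lr * grad(lambda M: cost(M, L), K)
        L = L + lr * grad(lambda M: cost(K, M), L)
        if not np.isfinite(cost(K, L)):
            return K, L, False   # closed loop destabilized mid-run
    return K, L, True

def double_loop(K, L, outer=50, inner=100, lr=1e-2):
    """Double loop: approximately solve the adversary's inner maximization
    before each agent update."""
    for _ in range(outer):
        for _ in range(inner):                      # inner loop: adversary ascent
            L = L + lr * grad(lambda M: cost(K, M), L)
        K = K - lr * grad(lambda M: cost(M, L), K)  # outer loop: agent descent
    return K, L, np.isfinite(cost(K, L))

K0 = np.array([[0.2, 1.2]])   # a stabilizing initialization (assumed)
L0 = np.zeros((1, 2))
print("alternating stays stable:", alternating(K0.copy(), L0.copy())[2])
print("double-loop stays stable:", double_loop(K0.copy(), L0.copy())[2])
```

The structural difference this sketch isolates is the one the abstract emphasizes: the double-loop variant (approximately) solves the adversary's maximization before each agent update, which is the regime in which the paper's convergence and robust-stability guarantees are stated, whereas the greedy alternation takes single interleaved steps and can leave the set of stabilizing gains.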